- North America > United States (0.14)
- Asia (0.04)
- Information Technology > Security & Privacy (0.47)
- Government (0.47)
Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization
However, SSAT suffers from catastrophic overfitting (CO), a phenomenon that leads to a severely distorted classifier, making it vulnerable to multi-step adversarial attacks. In this work, we observe that some adversarial examples generated on the SSAT-trained network exhibit anomalous behaviour: although these training samples are generated by the inner maximization process, their associated loss decreases instead. We name these abnormal adversarial examples (AAEs).
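The AAE criterion in the abstract — an adversarial example produced by the inner maximization step whose loss nonetheless drops below the clean loss — can be sketched as follows. This is a minimal illustration, assuming a linear softmax classifier and a single FGSM step as the inner maximization; the function names and epsilon are illustrative, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(W, x, y):
    # Per-sample cross-entropy loss of the linear softmax classifier x @ W.
    p = softmax(x @ W)
    return -np.log(p[np.arange(len(y)), y] + 1e-12)

def find_abnormal_examples(W, x, y, epsilon=0.03):
    """Boolean mask over the batch: True where the FGSM step (the inner
    maximization) DECREASED the loss -- the AAE criterion."""
    p = softmax(x @ W)
    onehot = np.eye(W.shape[1])[y]
    grad_x = (p - onehot) @ W.T                        # dL/dx per sample
    x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)
    return cross_entropy(W, x_adv, y) < cross_entropy(W, x, y)
```

With a deep network, `grad_x` would come from backpropagation rather than this closed form; for a purely linear model the FGSM step can only increase the loss, so AAEs are a genuinely non-convex phenomenon of the trained network.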
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
- North America > Canada > British Columbia > Vancouver Island > Capital Regional District > Victoria (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Leisure & Entertainment (0.67)
- Information Technology (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Search (0.43)
- Asia > Nepal (0.04)
- Asia > China > Jiangsu Province > Nanjing (0.04)
- Asia > China > Chongqing Province > Chongqing (0.04)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Data Science > Data Mining (0.93)
Supplementary Materials of Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks
We evaluate the identified RSTs' robustness against more attacks on top of two networks on CIFAR-10 as a complement to Sec. As observed in Tab. 1, the RSTs searched by PGD-7 training are also robust against other attacks. As observed in Figure 1, RSTs drawn from randomly initialized networks achieve natural accuracy comparable to RTTs drawn from naturally/adversarially trained networks, and adversarial RTTs generally achieve the best natural accuracy. (A garbled table fragment listing accuracy columns for adversarially trained dense models, "Dense Adv. Trained 70.70 74.35 77.20 ...", is omitted here.)
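The "robust scratch ticket" idea — selecting a subnetwork inside a randomly initialized network without ever training its weights — can be illustrated by the masking step alone. A minimal sketch, assuming a per-weight score array (in the paper, such scores are optimized with adversarial training while the random weights stay frozen); the function name and `keep_ratio` are illustrative:

```python
import numpy as np

def draw_subnetwork(weights, scores, keep_ratio=0.5):
    """Zero out all but the top keep_ratio fraction of weights by score.
    The surviving weights keep their random initialization untouched."""
    flat = scores.ravel()
    k = max(1, int(round(keep_ratio * flat.size)))
    keep_idx = np.argpartition(-flat, k - 1)[:k]   # indices of top-k scores
    mask = np.zeros(flat.size, dtype=weights.dtype)
    mask[keep_idx] = 1.0
    return weights * mask.reshape(weights.shape)
```

Applied layer by layer, this yields the sparse subnetwork whose robustness the tables above evaluate; only the scores, never the weights, are learned.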
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- (2 more...)
- North America > United States > Virginia (0.04)
- Asia > China (0.04)
Studying Various Activation Functions and Non-IID Data for Machine Learning Model Robustness
Long Dang, Thushari Hapuarachchi, Kaiqi Xiong, Jing Lin
Adversarial training is an effective method to improve machine learning (ML) model robustness. Most existing studies consider only the rectified linear unit (ReLU) activation function and centralized training environments. In this paper, we study ML model robustness using ten different activation functions through adversarial training in centralized environments and explore ML model robustness in federated learning environments. In the centralized environment, we first propose an advanced adversarial training approach that improves ML model robustness by incorporating model architecture changes, soft labeling, simplified data augmentation, and varying learning rates. Then, we conduct extensive experiments on ten well-known activation functions in addition to ReLU to better understand how they impact ML model robustness. Furthermore, we extend the proposed adversarial training approach to the federated learning environment, where both independent and identically distributed (IID) and non-IID data settings are considered. Our proposed centralized adversarial training approach achieves a natural and robust accuracy of 77.08% and 67.96%, respectively, on CIFAR-10 against fast gradient sign attacks. Experiments on the ten activation functions reveal that ReLU usually performs best. In the federated learning environment, however, the robust accuracy decreases significantly, especially on non-IID data. To address this significant performance drop in the non-IID case, we introduce data sharing and achieve a natural and robust accuracy of 70.09% and 54.79%, respectively, surpassing the CalFAT algorithm when 40% data sharing is used. That is, a proper percentage of data sharing can significantly improve ML model robustness, which is useful for some real-world applications.
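The data-sharing remedy for non-IID federated clients described above can be sketched as follows. This assumes the common sort-by-label sharding used to simulate non-IID splits, with a globally shuffled shared pool appended to every client; the function name, client count, and share fraction are illustrative parameters, not the paper's code.

```python
import numpy as np

def non_iid_clients_with_sharing(labels, num_clients=5, share_frac=0.4, seed=0):
    """Split sample indices into label-skewed (non-IID) shards, then append
    an IID shared pool covering share_frac of all data to every client."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    order = np.argsort(labels, kind="stable")   # sort-by-label => skewed shards
    shards = np.array_split(order, num_clients)
    shared = rng.permutation(n)[: int(share_frac * n)]   # globally IID pool
    # Shared indices may duplicate a client's own shard; fine for a sketch.
    return [np.concatenate([shard, shared]) for shard in shards]
```

With `share_frac=0.0` every client sees only a few labels, which is the setting where the abstract reports the sharp robustness drop; raising the fraction toward 0.4 re-balances each client's label distribution.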
- North America > United States > Florida > Hillsborough County > Tampa (0.14)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > United States > Florida > Hillsborough County > University (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Overview (0.92)
- Information Technology > Security & Privacy (1.00)
- Education (1.00)